1.
Appl Psychol Meas ; 47(7-8): 443-459, 2023 Nov.
Article En | MEDLINE | ID: mdl-37997580

The present paper introduces a random weights linear logistic test model for the measurement of individual differences in operation-specific practice effects within a single administration of a test. The proposed model is an extension of the linear logistic test model of learning developed by Spada (1977) in which the practice effects are considered random effects varying across examinees. A Bayesian framework was used for model estimation and evaluation. A simulation study was conducted to examine the behavior of the model in combination with the Bayesian procedures. The results demonstrated the good performance of the estimation and evaluation methods. Additionally, an empirical study was conducted to illustrate the applicability of the model to real data. The model was applied to a sample of responses from a logical ability test providing evidence of individual differences in operation-specific practice effects.
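The model family can be sketched as follows (generic notation assumed here for illustration, not necessarily the paper's own):

```latex
% Operation-specific learning model with random practice effects
% (notation assumed, not taken from the paper):
\operatorname{logit} P(X_{pi} = 1)
  = \theta_p - \beta_i + \sum_{k} q_{ik}\,\gamma_{pk}\,r_{pik}
% \theta_p : ability of examinee p;   \beta_i : difficulty of item i;
% q_{ik} = 1 if item i involves operation k;
% r_{pik} : number of items involving operation k answered before item i;
% \gamma_{pk} : practice effect for operation k, random across examinees,
%               e.g. \gamma_{pk} \sim N(\gamma_k, \sigma_k^2).
```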

2.
Educ Psychol Meas ; 83(4): 782-807, 2023 Aug.
Article En | MEDLINE | ID: mdl-37398844

The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and incorrect responses, which allows for distinguishing different types of learning effects in the data. Model estimation and evaluation is based on a Bayesian framework. A simulation study is presented that examines the performance of the estimation and evaluation methods. The results show accuracy in parameter recovery as well as good performance in model evaluation and selection. An empirical study illustrates the applicability of the model to data from a logical ability test.

3.
Front Psychol ; 13: 982137, 2022.
Article En | MEDLINE | ID: mdl-36533017

Educational and psychological measurement is typically based on dichotomous variables or rating scales comprising a few ordered categories. When the mean of the observed responses approaches the upper or lower bound of the scale, the distribution of the data becomes skewed and, if a categorical factor model holds in the population, the Pearson correlation between variables is attenuated. The consequence of this correlation attenuation is that the traditional linear factor model yields an excessive number of factors. This article presents the results of a simulation study investigating the problem of overfactoring and some solutions. We compare five widely known approaches: (1) the maximum-likelihood factor analysis (FA) model for normal data, (2) the categorical factor analysis (FAC) model based on polychoric correlations and maximum likelihood (ML) estimation, (3) the FAC model estimated using a weighted least squares algorithm, (4) the Satorra-Bentler mean-corrected chi-square statistic to handle the lack of normality, and (5) Samejima's graded response model (GRM) from item response theory (IRT). The likelihood-ratio chi-square, parallel analysis (PA), and categorical parallel analysis (CPA) are used as goodness-of-fit criteria to estimate the number of factors in the simulation study. Our results indicate that maximum-likelihood estimation led to overfactoring in the presence of skewed variables for both the linear and the categorical factor models. The Satorra-Bentler correction and the GRM constitute the most reliable alternatives for estimating the number of factors.
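The attenuation mechanism can be seen in a few lines of simulation (an illustration only, not the paper's study design): dichotomizing skewed bivariate-normal variables pulls the Pearson (phi) correlation well below the latent correlation.

```python
import numpy as np

# Generate bivariate-normal latent variables with a known correlation,
# then dichotomize at an extreme threshold so the 0/1 data are skewed.
rng = np.random.default_rng(0)
n, latent_r = 200_000, 0.6
cov = np.array([[1.0, latent_r], [latent_r, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

x = (z[:, 0] > 1.5).astype(float)  # heavily skewed observed variables
y = (z[:, 1] > 1.5).astype(float)
phi = np.corrcoef(x, y)[0, 1]      # Pearson (phi) correlation of the 0/1 data

print(latent_r, round(phi, 2))     # phi falls well below the latent 0.6
```

A polychoric correlation would instead estimate the latent 0.6 from the same dichotomized data, which is why the categorical approaches resist overfactoring.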

4.
Front Psychol ; 13: 923877, 2022.
Article En | MEDLINE | ID: mdl-36092049

Bifactor latent models have gained popularity and are widely used to model construct multidimensionality. When adopting a confirmatory approach, a common practice is to assume that all cross-loadings take zero values. This article presents the results of a simulation study exploring the impact of ignoring non-zero cross-loadings on the performance of confirmatory bifactor analysis. The present work contributes to previous research by including study conditions that had not been examined before; for instance, a wider range of values is considered for the factor loadings, both for the group factors and the cross-loadings. Parameter recovery is analyzed, but the focus of the study is on assessing the sensitivity of goodness-of-fit indices to the model misspecification that consists of ignoring non-zero cross-loadings. Several commonly used SEM fit indices are examined: both biased estimators of the fit index (CFI, GFI, and SRMR) and unbiased estimators (RMSEA and SRMR). Results indicated that parameter recovery worsens when moderate and large cross-loading values are ignored and small sample sizes are used, and that commonly used SEM fit indices are not useful for detecting such model misspecifications. We recommend the use of the unbiased SRMR index with a cutoff value adjusted by the communality level (R²), as it is the only fit index sensitive to the model misspecification due to ignoring non-zero cross-loadings in the bifactor model. The results of the present study provide insights into modeling cross-loadings in confirmatory bifactor models, as well as practical recommendations for researchers.
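The misspecification under study can be written out as follows (generic bifactor notation assumed here, not the paper's own):

```latex
% Population model for an indicator x_j with a nonzero cross-loading
% on a second group factor S_{k'}:
x_{j} = \lambda_{Gj}\, G + \lambda_{jk}\, S_{k}
        + \lambda_{jk'}\, S_{k'} + \varepsilon_{j},
\qquad \lambda_{jk'} \neq 0,\ k' \neq k
% Fitted (misspecified) model: the cross-loading is fixed to zero,
x_{j} = \lambda_{Gj}\, G + \lambda_{jk}\, S_{k} + \varepsilon_{j},
% so the variance carried by \lambda_{jk'} S_{k'} is forced into
% the general factor, the group factor, or the residual.
```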

5.
Multivariate Behav Res ; 57(1): 57-78, 2022.
Article En | MEDLINE | ID: mdl-32804553

The particularities of bounded data are often overlooked. This type of data is likely to display a pattern of skewness because of the existence of an upper and a lower limit that cannot be exceeded. In the context of factor analysis, when variables are skewed in opposite directions, using normal-theory factor analysis might lead to over-factoring. We propose a Bayesian beta factor model to analyze doubly bounded data. A simulation study was conducted to evaluate the performance of the normal and beta factor models in the presence of skewed variables. Two Bayesian approaches to model evaluation are considered: posterior predictive checking and three information criterion measures (DIC, WAIC, and LOO). The number of estimated factors based on the Bayesian methods is compared for the normal and beta factor models. An application of the model using real data is also presented. We found that the beta factor model constitutes a suitable alternative for analyzing data with a pattern of mixed skewness. Posterior predictive checking appears to be a viable option for selecting the optimal number of factors in Bayesian factor analysis.
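One common way to set up such a beta factor model is the following (this parameterization is an assumption for illustration; the paper's may differ):

```latex
% Beta likelihood for a doubly bounded response rescaled to (0,1):
y_{ij} \mid \mu_{ij}, \phi_j \sim
  \mathrm{Beta}\!\big(\mu_{ij}\phi_j,\ (1-\mu_{ij})\phi_j\big),
\qquad
\operatorname{logit}(\mu_{ij}) = \tau_j + \boldsymbol{\lambda}_j^{\top}\boldsymbol{\eta}_i
% y_{ij} : response of person i to item j, mapped into (0,1);
% \mu_{ij} : conditional mean;   \phi_j : precision of item j;
% \boldsymbol{\eta}_i : latent factors;   \boldsymbol{\lambda}_j : loadings.
% The beta likelihood accommodates skewness near the bounds that a
% normal-theory model must absorb as spurious extra factors.
```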


Bayes Theorem , Computer Simulation , Factor Analysis, Statistical
6.
Educ Psychol Meas ; 81(6): 1054-1088, 2021 Dec.
Article En | MEDLINE | ID: mdl-34565816

Situational judgment tests have gained popularity in educational and psychological measurement and are widely used in personnel assessment. A situational judgment item presents a hypothetical scenario and a list of actions, and individuals are asked to select their most likely action for that scenario. Because the actions have no explicit order, the item generates nominal responses consisting of the actions selected by the individuals. This article shows how to factor-analyze the nominal responses originating from such a test, including the estimation of the number of latent factors and a factor invariance analysis in a multiple-group design. The method consists of applying the multidimensional nominal categories model (MNCM), a multidimensional extension of Bock's nominal categories model. The article includes the results of two studies: (1) a simulation study of Type-I error rate, statistical power, and parameter recovery in a multigroup factorial invariance design, and (2) a real-data example using responses to a situational judgment test measuring gender stereotypes to illustrate the approach. Results suggest the use of the Akaike information criterion, Bayesian information criterion, and corrected Bayesian information criterion indices to guide the selection of the number of factors with nominal responses. All the analyses are conducted using the computer program Mplus. The code is included as Supplemental Material (available online) so that readers can adapt it to their own purposes.

7.
Psychometrika ; 86(4): 994-1015, 2021 12.
Article En | MEDLINE | ID: mdl-34460068

The present paper introduces a new explanatory item response model to account for the learning that takes place during a psychometric test due to the repeated use of the operations involved in the items. The proposed model is an extension of the operation-specific learning model (Fischer and Formann in Appl Psychol Meas 6:397-416, 1982; Scheiblechner in Z für Exp Angew Psychol 19:476-506, 1972; Spada in Spada and Kempf (eds.) Structural models of thinking and learning, Huber, Bern, Germany, pp 227-262, 1977). The paper discusses special cases of the model, which, together with the general formulation, differ in the type of response in which the model states that learning occurs: (1) correct and incorrect responses equally (non-contingent learning); (2) correct responses only (contingent learning); and (3) correct and incorrect responses to a different extent (differential contingent learning). A Bayesian framework is adopted for model estimation and evaluation. A simulation study is conducted to examine the performance of the estimation and evaluation methods in recovering the true parameters and selecting the true model. Finally, an empirical study is presented to illustrate the applicability of the model to detect learning effects using real data.


Learning , Bayes Theorem , Computer Simulation , Psychometrics
8.
Rev. esp. med. legal ; 45(3): 98-107, jul.-sept. 2019. tab, graf
Article Es | IBECS | ID: ibc-188610



Introduction: This paper presents an analysis of the data of the technical team (psychologist, social worker and educator) of the juvenile court of Toledo (Spain) from 2001 to 2012. Methodology: Descriptive study of the annual series of a sample of 3,333 juvenile offenders and their psychosocial, educational and criminal characteristics. Results: In the final years of the series, more serious offences tend to be committed by older boys. Conclusions: Juvenile delinquency in this province remained constant from 2001 until 2009, when a sharp increase was observed, before falling again in 2011 and 2012. However, in the final years of the series (from 2008 onwards), an increase in the severity of the offences is noted, as crimes significantly outnumbered misdemeanours. It was also found that the system probably underdiagnoses juvenile offenders, both in mental health problems and in alcohol and other drug use. Therefore, these minors may not benefit adequately from the measures imposed by the juvenile justice system.


Humans , Male , Female , Adolescent , Juvenile Delinquency/statistics & numerical data , Psychosocial Impact , Crime/classification , Crime/statistics & numerical data , Spain/epidemiology , Socioeconomic Factors
9.
Span J Psychol ; 21: E63, 2018 Dec 03.
Article En | MEDLINE | ID: mdl-30501654

This article describes the development of the ENCUIST (Extroversion, Neuroticism, Callous-Unemotional, Instability, Short-Test) questionnaire, which has been created to provide a personality profiling method based on a cognitive diagnostic modeling framework. The ENCUIST measures the attributes of extroversion, neuroticism, callous unemotionality and overt expressions of anger that are relevant in a forensic context. The scores provided by the ENCUIST are binary classifications of the individuals (high/low) in these attributes. The ENCUIST was developed using a sample of 516 subjects to study its validation through psychometric procedures, including factor analysis, cognitive diagnostic modeling and structural equation modeling. The results supported a four-factor structure. Linear regressions were used to evaluate the predictive validity of the scores provided by ENCUIST with respect to two external criteria that are relevant in the forensic context, namely behavioral activation and behavioral inhibition. The results showed that the extroversion dimension is positively related to behavioral activation, although the effect size is modest and the proportion of explained variance is only 11%. Moreover, the dimensions of neuroticism and anger expression are positively related to behavioral inhibition, with 7% of the variance explained. Together, these results suggest that cognitive diagnostic models are useful tools for the elaboration of personality profiles based on classifying subjects along binary attributes.


Personality Tests , Psychometrics/methods , Adult , Female , Humans , Male , Models, Statistical , Psychometrics/instrumentation , Young Adult
10.
Span. j. psychol ; 21: e63.1-e63.13, 2018. tab
Article En | IBECS | ID: ibc-189178

This article describes the development of the ENCUIST (Extroversion, Neuroticism, Callous-Unemotional, Instability, Short-Test) questionnaire, which has been created to provide a personality profiling method based on a cognitive diagnostic modeling framework. The ENCUIST measures the attributes of extroversion, neuroticism, callous unemotionality and overt expressions of anger that are relevant in a forensic context. The scores provided by the ENCUIST are binary classifications of the individuals (high/low) in these attributes. The ENCUIST was developed using a sample of 516 subjects to study its validation through psychometric procedures, including factor analysis, cognitive diagnostic modeling and structural equation modeling. The results supported a four-factor structure. Linear regressions were used to evaluate the predictive validity of the scores provided by ENCUIST with respect to two external criteria that are relevant in the forensic context, namely behavioral activation and behavioral inhibition. The results showed that the extroversion dimension is positively related to behavioral activation, although the effect size is modest and the proportion of explained variance is only 11%. Moreover, the dimensions of neuroticism and anger expression are positively related to behavioral inhibition, with 7% of the variance explained. Together, these results suggest that cognitive diagnostic models are useful tools for the elaboration of personality profiles based on classifying subjects along binary attributes.


Not available


Humans , Male , Female , Young Adult , Adult , Personality Tests , Psychometrics/methods , Psychometrics/instrumentation , Models, Statistical
11.
Front Psychol ; 8: 961, 2017.
Article En | MEDLINE | ID: mdl-28670291

This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factor model, is that there is a slope for each response category on the latent dimensions, instead of slopes associated with the items. The extended parameterization of the multidimensional nominal response model requires large samples for estimation. When the sample is of moderate or small size, some of these parameters may be weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO) that provide an indication of the relative merit of different models, and the standardized generalized discrepancy measure, which requires resampling data and is computationally more involved. A simulation study was conducted to compare these two approaches, and the results show that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model, whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data.
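The category-specific slopes can be sketched as a softmax over categories (generic notation, not the authors' exact parameterization; the item values below are hypothetical):

```python
import numpy as np

# Multidimensional nominal response model sketch: each category c of an
# item has its own slope vector a_c on the latent dimensions and an
# intercept b_c, and P(Y = c | theta) is a softmax over the categories.
def category_probs(theta, a, b):
    """theta: (d,) latent traits; a: (C, d) slopes; b: (C,) intercepts."""
    z = a @ theta + b        # (C,) linear predictors, one per category
    z -= z.max()             # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical 3-category item on 2 latent dimensions.
a = np.array([[0.0, 0.0], [1.2, -0.3], [0.4, 1.5]])
b = np.array([0.0, -0.5, 0.2])
p = category_probs(np.array([0.8, -0.2]), a, b)
print(p.round(3))  # probabilities over the 3 categories, summing to 1
```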

12.
Psicológica (Valencia, Ed. impr.) ; 38(1): 25-55, 2017. tab, graf
Article En | IBECS | ID: ibc-161212

This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional item response models of various levels of complexity. The paper evaluates the performance of parameter recovery via three simulation studies from a Bayesian approach. The first simulation uses a very simple unidimensional model to evaluate the effect of diffuse and concentrated prior distributions on recovery. The second study compares the MHRM algorithm with MML-EM and MCMC in the estimation of an item response model with a moderate number of correlated dimensions. The third simulation evaluates the performance of the MHRM, HMC, MML-EM and MCMC algorithms in the estimation of an item response model in a high-dimensional latent space. The results showed that MML-EM loses precision with high-dimensional models, whereas the other three algorithms recover the true parameters with similar precision. Apart from this, the main differences between the algorithms are: (1) estimation time is much shorter for MHRM than for the other algorithms; (2) MHRM achieves the best precision in all conditions and is less affected by prior distributions; and (3) prior distributions for the slopes in the MCMC and HMC algorithms should be carefully defined in order to avoid problems of factor orientation. In summary, the new algorithms seem to overcome the difficulties of the traditional ones by converging faster and producing accurate results.
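The Metropolis-Hastings step shared by several of the compared algorithms can be illustrated with a minimal random-walk sampler for a single Rasch ability parameter (an illustration only, not any of the cited software; difficulties, prior, and tuning values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 60 items with known difficulties, true ability = 1.0.
b = rng.normal(0.0, 1.0, size=60)
theta_true = 1.0
y = (rng.random(60) < 1 / (1 + np.exp(-(theta_true - b)))).astype(int)

def log_post(theta):
    # Bernoulli log-likelihood under the Rasch model + N(0, 2^2) prior
    p = 1 / (1 + np.exp(-(theta - b)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) - theta**2 / 8

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)   # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                      # accept; otherwise keep current
    chain.append(theta)

post_mean = np.mean(chain[1000:])         # discard burn-in
print(round(post_mean, 2))                # should land near theta_true
```

MHRM replaces the fixed chain with stochastic-approximation updates of the parameters, and HMC replaces the random walk with gradient-guided proposals, which is where the speed and precision differences reported above come from.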




Bayes Theorem , Psychiatric Status Rating Scales/statistics & numerical data , Algorithms , Psychometrics/statistics & numerical data , Models, Psychological , Factor Analysis, Statistical , Multivariate Analysis
13.
Psicothema (Oviedo) ; 28(3): 346-352, ago. 2016. tab, ilus, graf
Article En | IBECS | ID: ibc-154633

BACKGROUND: Multistage adaptive testing has recently emerged as an alternative to the computerized adaptive test. The current study details a new multistage test to assess fluid intelligence. METHOD: An item pool of progressive matrices with a constructed-response format was developed and divided into six subtests. The subtests were applied to a sample of 724 college students and their psychometric properties were studied (i.e., reliability, dimensionality and validity evidence). The item pool was calibrated under the graded response model, and two multistage structures were developed based on automatic test assembly principles. Finally, the test information provided by each structure was compared in order to select the most appropriate one. RESULTS: The item pool showed adequate psychometric properties. Of the two compared multistage structures, the simpler one (i.e., a routing test and two modules in the subsequent stages) was more informative across the latent trait continuum and was therefore kept. DISCUSSION: Taken together, the results of the two studies support the application of the FIMT (Fluid Intelligence Multistage Test), a multistage test to assess fluid intelligence accurately and innovatively.




Humans , Intelligence , Intelligence Tests , Psychometrics/instrumentation , Psychological Tests , Mental Processes , Reproducibility of Results
14.
Psicothema ; 28(3): 346-52, 2016 Aug.
Article En | MEDLINE | ID: mdl-27448271

BACKGROUND: Multistage adaptive testing has recently emerged as an alternative to the computerized adaptive test. The current study details a new multistage test to assess fluid intelligence. METHOD: An item pool of progressive matrices with a constructed-response format was developed and divided into six subtests. The subtests were applied to a sample of 724 college students and their psychometric properties were studied (i.e., reliability, dimensionality and validity evidence). The item pool was calibrated under the graded response model, and two multistage structures were developed based on automatic test assembly principles. Finally, the test information provided by each structure was compared in order to select the most appropriate one. RESULTS: The item pool showed adequate psychometric properties. Of the two compared multistage structures, the simpler one (i.e., a routing test and two modules in the subsequent stages) was more informative across the latent trait continuum and was therefore kept. DISCUSSION: Taken together, the results of the two studies support the application of the FIMT (Fluid Intelligence Multistage Test), a multistage test to assess fluid intelligence accurately and innovatively.
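The graded response model used for calibration can be sketched as follows: cumulative probabilities P(X ≥ k) are logistic in theta, and category probabilities are differences of adjacent cumulatives (the item values below are hypothetical):

```python
import numpy as np

# Samejima's graded response model for one polytomous item.
def grm_probs(theta, a, thresholds):
    """a: discrimination; thresholds: increasing boundaries b_1..b_{K-1}."""
    # P(X >= k) for k = 1..K-1, logistic in theta
    cum = 1 / (1 + np.exp(-a * (theta - np.asarray(thresholds))))
    # Pad with P(X >= 0) = 1 and P(X >= K) = 0, then difference.
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]          # P(X = k) for k = 0..K-1

# Hypothetical 4-category item.
p = grm_probs(theta=0.5, a=1.4, thresholds=[-1.0, 0.2, 1.3])
print(p.round(3))  # four category probabilities summing to 1
```

Summing the resulting item information over a module's items across theta is, in essence, how the informativeness of the two multistage structures was compared.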


Intelligence Tests , Adolescent , Adult , Female , Humans , Male , Psychometrics , Young Adult
15.
Psicológica (Valencia, Ed. impr.) ; 35(1): 149-169, 2014. tab, ilus
Article En | IBECS | ID: ibc-118513

Cognitive Diagnostic Models (CDMs) aim to provide information about the degree to which individuals have mastered the specific attributes that underlie their success on test items. The Q-matrix is a key element in the application of CDMs because it contains the item-attribute links representing the cognitive structure proposed for solving the test. Using a simulation study, we investigated the performance of two model-fit statistics (MAD and LSD) in detecting misspecifications in the Q-matrix within the least squares distance modeling framework. The manipulated test design factors included the number of respondents (300, 500, 1000), the number of attributes (1, 2, 3, 4), and the type of model (conjunctive vs. disjunctive). We investigated the behavior of MAD and LSD under correct Q-matrix specification, under Q-matrix misspecifications, and in a real-data application. The results show that the two model-fit indices are sensitive to Q-matrix misspecifications; consequently, cut-off points are proposed for use in applied contexts.
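The role of the Q-matrix in a conjunctive model can be made concrete with a DINA-style deterministic core (an illustration of the conjunctive rule only; the paper itself works in the least squares distance framework, and the matrices below are hypothetical):

```python
import numpy as np

# Q-matrix: rows are items, columns are attributes; q_jk = 1 means
# item j requires attribute k.
Q = np.array([[1, 0, 1],     # item 1 requires attributes 1 and 3
              [0, 1, 0],     # item 2 requires attribute 2
              [1, 1, 1]])    # item 3 requires all three attributes

alpha = np.array([1, 0, 1])  # examinee masters attributes 1 and 3

# Conjunctive rule: ideal response eta_j = prod_k alpha_k ** q_jk,
# i.e. 1 only if every required attribute is mastered.
eta = np.prod(np.power(alpha, Q), axis=1)
print(eta)  # [1 0 0]
```

A Q-matrix misspecification changes some q_jk and therefore the ideal response pattern, which is the discrepancy the MAD and LSD statistics are meant to detect.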




Humans , Male , Female , Validation Studies as Topic , Psychological Tests/standards , Cognitive Behavioral Therapy/methods , Cognitive Science/instrumentation , Cognitive Science/methods , Cognitive Science/organization & administration , Cognitive Reserve
16.
Br J Math Stat Psychol ; 65(1): 32-55, 2012 Feb.
Article En | MEDLINE | ID: mdl-22233175

Items that are clustered according to shared content may violate the principle of conditional independence commonly used in item response theory. This paper investigates the capabilities of a logistic item response model in relation to locally dependent item responses. The model includes main effect and interaction parameters that are computed as linear functions of the latent trait. The paper explains the interpretation of the parameters, the maximum likelihood estimation algorithm, the information matrix and some results concerning parameter identifiability. The problem of over-fitting the data is addressed in a simulation study, and two real data examples are described to illustrate the approach, one from the context of a sample survey and the other from ability testing using testlets.


Data Interpretation, Statistical , Likelihood Functions , Logistic Models , Computer Simulation/statistics & numerical data , Data Collection/statistics & numerical data , Education/statistics & numerical data , Humans , Religion
17.
Span J Psychol ; 14(1): 207-17, 2011 May.
Article En | MEDLINE | ID: mdl-21568178

The present paper aimed to examine questionnaire response patterns and objective task-based test behavioral patterns in order to analyze the differences people show in consistency. It is hypothesized that people tend to be more consistent when talking about themselves (when describing themselves through verbal statements) than when they solve a task (when behaving). Consistency is computed using the pi* statistic (Hernandez, Rubio, Revuelta, & Santacreu, 2006). According to this procedure, consistency is defined as the value and the dimensionality of the latent trait of an individual (theta) remaining invariant throughout the test. Participants who are consistent must show a constant theta and follow a given response pattern during the entire course of the test. A sample of 3,972 participants was used. Results reveal that 68% of participants showed a consistent response pattern when completing the questionnaire. When tackling the task-based test, the percentage was 66%, and 45% of individuals showed a consistent pattern in both tests. Implications for personality and individual differences assessment are discussed.


Behavior , Character , Personality Assessment/statistics & numerical data , Personality Inventory/statistics & numerical data , Problem Solving , Self Concept , Surveys and Questionnaires , Verbal Behavior , Adult , Female , Humans , Individuality , Male , Probability , Psychometrics/statistics & numerical data , Reproducibility of Results
18.
Span. j. psychol ; 14(1): 207-210, mayo 2011. tab, ilus
Article En | IBECS | ID: ibc-96467

The present paper aimed to examine questionnaire response patterns and objective task-based test behavioral patterns in order to analyze the differences people show in consistency. It is hypothesized that people tend to be more consistent when talking about themselves (when describing themselves through verbal statements) than when they solve a task (when behaving). Consistency is computed using the pi* statistic (Hernandez, Rubio, Revuelta, & Santacreu, 2006). According to this procedure, consistency is defined as the value and the dimensionality of the latent trait of an individual (theta) remaining invariant throughout the test. Participants who are consistent must show a constant theta and follow a given response pattern during the entire course of the test. A sample of 3,972 participants was used. Results reveal that 68% of participants showed a consistent response pattern when completing the questionnaire. When tackling the task-based test, the percentage was 66%, and 45% of individuals showed a consistent pattern in both tests. Implications for personality and individual differences assessment are discussed.




Humans , Male , Female , Task Performance and Analysis , Personality/physiology , Human Characteristics , Language , Affective Symptoms/psychology , Psychometrics/methods , Psychometrics/trends , Surveys and Questionnaires , Emotions/physiology , Psychometrics/instrumentation , Adaptation, Psychological/physiology , Psychopathology/trends
19.
Span J Psychol ; 13(1): 485-93, 2010 May.
Article En | MEDLINE | ID: mdl-20480714

An important methodological concern of any research based on a person-environment (P-E) fit approach is the operationalization of the fit, which imposes some measurement requirements that are rarely empirically tested with statistical methods. Among them, the assessment of the P and E components along commensurate dimensions is possibly the most cited one. This paper proposes to test the equivalence across the P and E measures by analyzing the measurement invariance of a multi-group confirmatory factor analysis model. From a methodological point of view, the distinct aspect of this approach within the context of P-E fit research is that measurement invariance is assessed in a repeated measures design. An example illustrating the procedure in a person-organization (P-O) fit dataset is provided. Measurement invariance was tested at five different hierarchical levels: (1) configural, (2) first-order factor loadings, (3) second-order factor loadings, (4) residual variances of observed variables, and (5) disturbances of first-order factors. The results supported the measurement invariance across the P and O measures at the third level. The implications of these findings for P-E fit studies are discussed.
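The nested invariance levels enumerated above are typically compared with chi-square difference (likelihood-ratio) tests between adjacent models. A minimal sketch, assuming only that each nested model's fit chi-square and degrees of freedom are available; the numeric fit statistics below are hypothetical:

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Chi-square difference test between two nested CFA models: the
    restricted model adds equality constraints (e.g., equal first-order
    loadings across the P and E measures) to the freer model. A
    significant result means the added constraints worsen fit, i.e.,
    that level of invariance is rejected."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: a configural model vs. a model that
# constrains first-order loadings equal across the two measures.
d, ddf, p = chi2_difference_test(112.3, 54, 98.1, 48)
print(f"delta chi2 = {d:.1f}, delta df = {ddf}, p = {p:.4f}")
```

In practice the sequence stops at the first level whose added constraints are rejected; in the study above that point was reached after the second-order loadings (level 3).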


Adaptation, Psychological , Organizational Culture , Personality Inventory/statistics & numerical data , Social Environment , Achievement , Adult , Female , Humans , Job Satisfaction , Male , Models, Psychological , Organizational Objectives , Personal Autonomy , Psychometrics/statistics & numerical data , Reproducibility of Results , Safety , Spain